OrdinalCLIP: Learning Rank Prompts for Language-Guided Ordinal Regression

Neural Information Processing Systems

This paper presents a language-powered paradigm for ordinal regression. Existing methods usually treat each rank as a category and employ a set of weights to learn these concepts. Such methods are prone to overfitting and usually attain unsatisfactory performance, as the learned concepts are mainly derived from the training set. Recent large pre-trained vision-language models like CLIP have shown impressive performance on various visual tasks. In this paper, we propose to learn the rank concepts from the rich semantic CLIP latent space. Specifically, we reformulate this task as an image-language matching problem with a contrastive objective, which regards labels as text and obtains a language prototype from a text encoder for each rank. Since prompt engineering for CLIP is extremely time-consuming, we propose OrdinalCLIP, a differentiable prompting method for adapting CLIP for ordinal regression.
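The matching step described in the abstract can be sketched as follows. This is a minimal NumPy illustration of the general idea (cosine similarity between an image embedding and per-rank language prototypes, followed by a softmax over ranks), not the authors' implementation; the function names, the temperature value, and the expectation-based rank decoding are assumptions for illustration.

```python
import numpy as np

def rank_probabilities(image_feat, rank_prototypes, temperature=0.07):
    """Match one image embedding against K per-rank language prototypes.

    Both inputs are L2-normalized so the dot product is cosine similarity,
    then a temperature-scaled softmax turns similarities into a distribution
    over the K ranks.
    """
    img = image_feat / np.linalg.norm(image_feat)
    protos = rank_prototypes / np.linalg.norm(rank_prototypes, axis=1, keepdims=True)
    logits = protos @ img / temperature      # cosine similarity per rank
    exp = np.exp(logits - logits.max())      # numerically stable softmax
    return exp / exp.sum()

def expected_rank(probs):
    """Decode a rank estimate as the expectation sum_k k * p(k),
    a common choice for ordinal outputs."""
    return float(np.dot(np.arange(len(probs)), probs))
```

In an actual CLIP-based pipeline, `image_feat` would come from the image encoder and each row of `rank_prototypes` from the text encoder applied to a rank prompt.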


A Experimental Settings

Neural Information Processing Systems

All experiments were conducted on a single NVIDIA RTX 3090 GPU. The obtained text features were also projected into the CLIP latent space via an FC layer. Test images followed the same preprocessing, except that center cropping was used. In addition, classification accuracy is adopted for Adience. Image Aesthetics Assessment: an ImageNet pre-trained VGG-16 was used as the image encoder.






Semantically Guided Representation Learning For Action Anticipation

Diko, Anxhelo, Avola, Danilo, Prenkaj, Bardh, Fontana, Federico, Cinque, Luigi

arXiv.org Artificial Intelligence

Action anticipation is the task of forecasting future activity from a partially observed sequence of events. However, this task is exposed to intrinsic future uncertainty and the difficulty of reasoning upon interconnected actions. Unlike previous works that focus on extrapolating better visual and temporal information, we concentrate on learning action representations that are aware of their semantic interconnectivity based on prototypical action patterns and contextual co-occurrences. To this end, we propose the novel Semantically Guided Representation Learning (S-GEAR) framework. S-GEAR learns visual action prototypes and leverages language models to structure their relationship, inducing semanticity. To gather insights on S-GEAR's effectiveness, we test it on four action anticipation benchmarks, obtaining improved results compared to previous works: +3.5, +2.7, and +3.5 absolute points on Top-1 Accuracy on Epic-Kitchen 55, EGTEA Gaze+ and 50 Salads, respectively, and +0.8 on Top-5 Recall on Epic-Kitchens 100. We further observe that S-GEAR effectively transfers the geometric associations between actions from language to visual prototypes. Finally, S-GEAR opens new research frontiers in anticipation tasks by demonstrating the intricate impact of action semantic interconnectivity.
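One plausible reading of "transferring the geometric associations between actions from language to visual prototypes" is to encourage the pairwise similarity structure of the visual prototypes to mirror that of the language embeddings. The sketch below is a hypothetical NumPy illustration of such an objective; the function name and the exact loss form are assumptions, not the paper's actual formulation.

```python
import numpy as np

def semantic_geometry_loss(visual_protos, text_embeds):
    """Mean squared difference between the pairwise cosine-similarity
    matrices of visual prototypes and language embeddings, pushing the
    visual geometry toward the language geometry (illustrative only)."""
    def cos_matrix(x):
        x = x / np.linalg.norm(x, axis=1, keepdims=True)
        return x @ x.T
    diff = cos_matrix(visual_protos) - cos_matrix(text_embeds)
    return float((diff ** 2).mean())
```

When the two sets of embeddings induce identical similarity structures, the loss is zero; any mismatch in the relative angles between action representations contributes to the penalty.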